
    Teacher Enrollment in MITx MOOCs: Are We Educating Educators?

    Participants in Massive Open Online Courses (MOOCs) come from an incredibly diverse set of backgrounds and act with a wide range of intentions (Christensen 2013, Ho 2014). Interestingly, our own recent surveys of 11 MITx courses on edX in the spring of 2014 show that teachers (versus traditional college students) are a significant fraction of MITx MOOC participants. This suggests many ways to improve and harness MOOCs, including the potential arising from the collective professional experience of participants, opportunities for facilitating educator networks, MOOCs as a venue for expert-novice interactions, and possible added value from enhancing the teacher experience through accreditation models and enabling individual teacher reuse of MOOC content. Here, we present detailed data from these teacher enrollment surveys, illuminate teacher participation in discussion forums, and draw lessons for improving the utility of MOOCs for teachers.

    Beyond Prediction: First Steps Toward Automatic Intervention in MOOC Student Stopout

    High attrition rates in massive open online courses (MOOCs) have motivated growing interest in the automatic detection of student "stopout". Stopout classifiers can be used to orchestrate an intervention before students quit, and to survey students dynamically about why they ceased participation. In this paper we expand on existing stopout-detection research by (1) exploring important elements of classifier design, such as generalizability to new courses; (2) developing a novel framework, inspired by control theory, for how to use a classifier's outputs to make intelligent decisions; and (3) presenting results from a "dynamic survey intervention" conducted on two HarvardX MOOCs, containing over 40,000 students, in early 2015. Our results suggest that surveying students based on an automatic stopout classifier achieves higher response rates than traditional post-course surveys, and may boost students' propensity to "come back" into the course.
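
    As a minimal Python sketch of the basic pattern described above (not the paper's actual system): a classifier estimates each student's stopout probability from activity features, and a simple threshold on that probability gates the survey intervention. The feature names, synthetic data, and 0.7 threshold are illustrative assumptions; the paper's control-theory-inspired framework for acting on classifier outputs is considerably richer.

        # Hedged sketch: train a stopout classifier on synthetic weekly-activity
        # features, then trigger a survey when predicted stopout risk crosses a
        # threshold. Features and the 0.7 cutoff are invented for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Per-student features: [forum_posts, videos_watched, problems_tried]
        X = rng.poisson(lam=[3.0, 8.0, 5.0], size=(1000, 3)).astype(float)
        # Synthetic labels: low-activity students are likelier to stop out
        y = (X.sum(axis=1) + rng.normal(0, 3, 1000) < 12).astype(int)

        clf = LogisticRegression().fit(X, y)

        def maybe_intervene(features, threshold=0.7):
            """Send a dynamic survey if predicted stopout probability is high."""
            p_stopout = clf.predict_proba([features])[0, 1]
            if p_stopout >= threshold:
                return f"survey sent (p_stopout={p_stopout:.2f})"
            return f"no action (p_stopout={p_stopout:.2f})"

        print(maybe_intervene([0.0, 1.0, 0.0]))   # inactive: likely triggers
        print(maybe_intervene([5.0, 12.0, 9.0]))  # active: likely no action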

    HarvardX and MITx: Two Years of Open Online Courses Fall 2012-Summer 2014

    What happens when well-known universities offer online courses, assessments, and certificates of completion for free? Early descriptions of Massive Open Online Courses (MOOCs) have emphasized large enrollments, low certification rates, and highly educated registrants. We use data from two years and 68 open online courses offered by Harvard University (via HarvardX) and MIT (via MITx) to broaden the scope of answers to this question. We describe trends over this two-year span, depict participant intent using comprehensive survey instruments, and chart course participation pathways using network analysis. We find that overall participation in our MOOCs remains substantial and that average growth has been steady. We explore how diverse audiences, including explorers, teachers-as-learners, and residential students, provide opportunities to advance the principles on which HarvardX and MITx were founded: access, research, and residential education.
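
    To make the network-analysis idea concrete, the hypothetical Python sketch below treats courses as nodes and participants who move from one course to another as weighted directed edges; the course names and counts are invented rather than drawn from the report.

        # Illustrative "participation pathways" graph: registrants who enter one
        # course and later participate in another become weighted directed edges.
        # All data here is hypothetical.
        import networkx as nx

        pathways = [  # (from_course, to_course, shared participants)
            ("6.00x", "8.02x", 420),
            ("6.00x", "CS50x", 310),
            ("8.02x", "8.MReV", 150),
        ]

        G = nx.DiGraph()
        for src, dst, n in pathways:
            G.add_edge(src, dst, weight=n)

        # Courses that most often serve as entry points into other courses
        for course, deg in sorted(G.out_degree(weight="weight"),
                                  key=lambda kv: -kv[1]):
            print(f"{course}: {deg} outgoing participants")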

    Skyline: Interactive In-Editor Computational Performance Profiling for Deep Neural Network Training

    Training a state-of-the-art deep neural network (DNN) is a computationally expensive and time-consuming process, which incentivizes deep learning developers to debug their DNNs for computational performance. However, effectively performing this debugging requires intimate knowledge of the underlying software and hardware systems, something that the typical deep learning developer may not have. To help bridge this gap, we present Skyline: a new interactive tool for DNN training that supports in-editor computational performance profiling, visualization, and debugging. Skyline's key contribution is that it leverages special computational properties of DNN training to provide (i) interactive performance predictions and visualizations, and (ii) directly manipulatable visualizations that, when dragged, mutate the batch size in the code. As an in-editor tool, Skyline allows users to leverage these diagnostic features to debug the performance of their DNNs during development. An exploratory qualitative user study of Skyline produced promising results: all the participants found Skyline to be useful and easy to use.
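
    For intuition about one quantity Skyline visualizes, the Python sketch below brute-force times training iterations of a toy PyTorch model at several batch sizes and reports throughput. Skyline itself predicts such curves interactively from computational properties of DNN training rather than by exhaustive measurement, so the model, batch sizes, and timing loop here are stand-ins.

        # Measure training throughput (samples/s) as the batch size varies.
        # The tiny MLP and the batch sizes are placeholders for illustration.
        import time
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
        loss_fn = nn.CrossEntropyLoss()
        opt = torch.optim.SGD(model.parameters(), lr=0.01)

        for batch_size in (32, 64, 128, 256):
            x = torch.randn(batch_size, 128)
            y = torch.randint(0, 10, (batch_size,))
            start = time.perf_counter()
            for _ in range(20):  # time 20 training iterations
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
            elapsed = time.perf_counter() - start
            print(f"batch={batch_size:4d}  {20 * batch_size / elapsed:9.0f} samples/s")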

    MLPerf Inference Benchmark

    Demand for machine-learning (ML) hardware and software systems is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability.
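
    In the spirit of the benchmark's single-stream scenario, which reports a latency percentile over repeated queries, here is a toy Python measurement loop. The stand-in model, query count, and reported percentiles are illustrative assumptions; real submissions run through MLPerf's LoadGen harness under strict rules.

        # Toy single-stream-style latency measurement. A sleep stands in for
        # real inference work; we report percentile latencies over 200 queries.
        import statistics
        import time

        def run_model(query):
            time.sleep(0.002)  # stand-in for real inference work
            return query

        latencies_ms = []
        for i in range(200):  # issue queries back to back
            start = time.perf_counter()
            run_model(i)
            latencies_ms.append((time.perf_counter() - start) * 1e3)

        latencies_ms.sort()
        p50 = statistics.median(latencies_ms)
        p90 = latencies_ms[int(0.90 * len(latencies_ms))]
        print(f"p50={p50:.2f} ms  p90={p90:.2f} ms  queries/s={1e3 / p50:.0f}")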